An adaptive gradient sampling algorithm for non-smooth optimization
Authors
Abstract
We present an algorithm for the minimization of f : R^n → R, assumed to be locally Lipschitz and continuously differentiable in an open dense subset D of R^n. The objective f may be nonsmooth and/or nonconvex. The method is based on the gradient sampling algorithm (GS) of Burke, Lewis, and Overton [SIAM J. Optim., 15 (2005), pp. 751-779]. It differs, however, from previously proposed versions of GS in that it is variable-metric and only O(1) gradient evaluations are required per iteration. Numerical experiments illustrate that the algorithm is much more efficient than GS in that it consistently requires significantly fewer gradient evaluations. In addition, the adaptive sampling procedure allows for warm-starting of the quadratic subproblem solver so that the number of subproblem iterations per nonlinear iteration is also reduced. Global convergence of the algorithm is proved assuming that the Hessian approximations are positive definite and bounded, an assumption shown to be true for the proposed Hessian approximation updating strategies.
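To make the sampling idea concrete, here is a minimal Python sketch of one plain GS iteration: gradients are sampled near the iterate, the minimum-norm element of their convex hull gives a search direction, and a backtracking line search follows. Everything here (function names, box rather than ball sampling, parameter values) is an illustrative assumption; in particular, the sketch omits the paper's two contributions, the variable metric and the adaptive reuse of previously sampled gradients that brings the cost down to O(1) gradient evaluations per iteration.

```python
# A minimal sketch of one plain gradient sampling (GS) step, assuming a
# locally Lipschitz objective f whose gradient grad_f is available on a
# dense subset. All names and parameters are illustrative, not the paper's.
import numpy as np
from scipy.optimize import minimize

def gs_step(x, f, grad_f, eps=1e-2, m=None, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = n + 1 if m is None else m          # classic GS samples >= n+1 points
    # Gradients at x and at m points sampled near x (box for simplicity;
    # GS proper samples uniformly in an eps-ball).
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, n))
    G = np.vstack([grad_f(x)] + [grad_f(p) for p in pts])
    # Minimum-norm point of conv{g_0, ..., g_m}:
    # minimize ||lam @ G||^2 over the simplex.
    k = G.shape[0]
    res = minimize(lambda lam: (lam @ G) @ (lam @ G),
                   np.full(k, 1.0 / k),
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq",
                                 "fun": lambda lam: lam.sum() - 1.0}])
    d = -(res.x @ G)                       # descent direction
    # Simple Armijo backtracking line search.
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx - 1e-4 * t * (d @ d) and t > 1e-12:
        t *= 0.5
    return x + t * d
```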
Similar Resources
Adaptive Sampling Probabilities for Non-Smooth Optimization
Standard forms of coordinate and stochastic gradient methods do not adapt to structure in data; their good behavior under random sampling is predicated on uniformity in data. When gradients in certain blocks of features (for coordinate descent) or examples (for SGD) are larger than others, there is a natural structure that can be exploited for quicker convergence. Yet adaptive variants...
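As a hedged illustration of the idea in this abstract (not the authors' method), the sketch below runs SGD with sampling probabilities proportional to running estimates of per-example gradient norms, reweighting each update so it remains unbiased. The moving-average rule and all parameter values are assumptions.

```python
# Non-uniform example sampling for SGD: examples with historically larger
# gradients are drawn more often, and each update is importance-weighted
# by 1/(n * p_i) so the expected update matches uniform SGD.
import numpy as np

def adaptive_sgd(grad_i, w, n_examples, steps=1000, lr=0.01, decay=0.9):
    """grad_i(w, i) returns the gradient of the i-th example's loss at w."""
    scores = np.ones(n_examples)          # running gradient-norm estimates
    rng = np.random.default_rng(0)
    for _ in range(steps):
        p = scores / scores.sum()         # current sampling distribution
        i = rng.choice(n_examples, p=p)
        g = grad_i(w, i)
        w -= lr * g / (n_examples * p[i]) # unbiased reweighted step
        scores[i] = decay * scores[i] + (1 - decay) * np.linalg.norm(g)
    return w
```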
A SOLUTION TO AN ECONOMIC DISPATCH PROBLEM BY A FUZZY ADAPTIVE GENETIC ALGORITHM
In practice, obtaining the global optimum for the economic dispatch (ED) problem with ramp rate limits and prohibited operating zones presents difficulties. This paper presents a new and efficient method for solving the economic dispatch problem with non-smooth cost functions using a Fuzzy Adaptive Genetic Algorithm (FAGA). The proposed algorithm deals with the issue of controlling the ex...
The Adaptive Sampling Gradient Method: Optimizing Smooth Functions with an Inexact Oracle
Consider settings such as stochastic optimization where a smooth objective function f is unknown but can be estimated with an inexact oracle such as quasi-Monte Carlo (QMC) or numerical quadrature. The inexact oracle is assumed to yield function estimates having error that decays with increasing oracle effort. For solving such problems, we present the Adaptive Sampling Gradient Method (ASGM) in...
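A hedged sketch of the inexact-oracle setting (not ASGM itself): the oracle's gradient error is assumed to decay as per-call effort grows, for example a Monte Carlo sample size, and the effort is increased as the gradient norm shrinks so that accuracy keeps pace with progress. The doubling rule and threshold below are illustrative guesses.

```python
# Gradient descent with an inexact oracle whose error decays with effort.
# oracle(x, effort) is a user-supplied estimator; the schedule is assumed.
import numpy as np

def inexact_gradient_descent(oracle, x, steps=50, lr=0.1, effort=100):
    for _ in range(steps):
        g = oracle(x, effort)            # gradient estimate at current effort
        x = x - lr * g
        if np.linalg.norm(g) < 1.0 / np.sqrt(effort):
            effort *= 2                  # demand a more accurate oracle
    return x
```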
Riemannian Optimization for High-Dimensional Tensor Completion
Tensor completion aims to reconstruct a high-dimensional data set with a large fraction of missing entries. The assumption of low-rank structure in the underlying original data allows us to cast the completion problem into an optimization problem restricted to the manifold of fixed-rank tensors. Elements of this smooth embedded submanifold can be efficiently represented in the tensor train (TT)...
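The tensor train machinery is involved, so as a toy analogue of optimizing over a fixed-rank set, the sketch below does fixed-rank matrix completion by projected gradient: a gradient step on the observed entries followed by truncated-SVD projection back to rank r. This is iterative hard thresholding, not the Riemannian TT method of the paper; all names and parameters are assumptions.

```python
# Fixed-rank matrix completion as a stand-in for fixed-rank tensor completion.
import numpy as np

def lowrank_complete(A_obs, mask, r, steps=200, lr=1.0):
    """A_obs holds observed entries (zeros elsewhere); mask is boolean."""
    X = np.zeros_like(A_obs)
    for _ in range(steps):
        X = X + lr * mask * (A_obs - X)   # gradient step on observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]   # project back to rank <= r
    return X
```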
Adaptive Probabilities in Stochastic Optimization Algorithms
Stochastic optimization methods have been extensively studied in recent years. In some classification scenarios, such as text document categorization, unbiased methods such as uniform sampling can harm the convergence rate because potential outlier data points distort the estimator. Consequently, more iterations would be needed to converge to the optimal value for...
Journal: Optimization Methods and Software
Volume: 28, Issue: -
Pages: -
Publication year: 2013